Self-supervised Transformer models such as wav2vec 2.0 and HuBERT have produced significant improvements over existing approaches to automatic speech recognition (ASR). This is evident in the performance of the wav2vec 2.0-based pretrained XLSR-53 model across many languages when fine-tuned on available labeled data. However, the fine-tuned performance of these models may depend on the amount of in-language or similar-language data included in the pretraining dataset. In this paper, we perform continued pretraining (CoPT) of the XLSR-53 pretrained model with unlabeled data from several low-resource languages. CoPT is more efficient than semi-supervised training (SST), the standard approach for using unlabeled data in ASR, because it omits the need to pseudo-label the unlabeled data. We show that CoPT results in word error rates (WERs) equal to or slightly better than those obtained with SST. In addition, we show that using the CoPT model for pseudo-labeling and using those labels in SST yields further improvements in WER.
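The abstract reports results in word error rate (WER). As a reference for that metric, here is a minimal sketch of WER as word-level Levenshtein distance divided by reference length (the function name `wer` is our own, not from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion out of six words
```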
We introduce camouflaged data poisoning attacks, a new attack vector that arises in the context of machine unlearning and other settings when model retraining may be induced. An adversary first adds a few carefully crafted points to the training dataset such that the impact on the model's predictions is minimal. The adversary subsequently triggers a request to remove a subset of the introduced points at which point the attack is unleashed and the model's predictions are negatively affected. In particular, we consider clean-label targeted attacks (in which the goal is to cause the model to misclassify a specific test point) on datasets including CIFAR-10, Imagenette, and Imagewoof. This attack is realized by constructing camouflage datapoints that mask the effect of a poisoned dataset.
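The threat model has a distinctive two-phase timeline. The sketch below is a toy illustration of that timeline only: the actual crafting of poison and camouflage points (the hard part, done by careful optimization) is stubbed out with random placeholders, and all names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: in the real attack, poison and camouflage points are
# carefully optimized so that their combined effect is minimal; here they
# are random placeholders used only to illustrate the attack timeline.
clean = rng.normal(size=(100, 8))
poison = rng.normal(size=(10, 8))
camouflage = rng.normal(size=(10, 8))

# Phase 1: the adversary submits poison + camouflage together, so the
# trained model's predictions are (by construction) barely affected.
training_set = np.concatenate([clean, poison, camouflage])

# Phase 2: the adversary requests unlearning of the camouflage points only;
# retraining on the remainder unmasks the poison's targeted effect.
after_unlearning = np.concatenate([clean, poison])

print(training_set.shape, after_unlearning.shape)
```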
We study discrete distribution estimation under user-level local differential privacy (LDP). In user-level $\varepsilon$-LDP, each user has $m\ge1$ samples and the privacy of all $m$ samples must be preserved simultaneously. We resolve the following dilemma: While on the one hand having more samples per user should provide more information about the underlying distribution, on the other hand, guaranteeing the privacy of all $m$ samples should make the estimation task more difficult. We obtain tight bounds for this problem under almost all parameter regimes. Perhaps surprisingly, we show that in suitable parameter regimes, having $m$ samples per user is equivalent to having $m$ times more users, each with only one sample. Our results demonstrate interesting phase transitions for $m$ and the privacy parameter $\varepsilon$ in the estimation risk. Finally, connecting with recent results on shuffled DP, we show that combined with random shuffling, our algorithm leads to optimal error guarantees (up to logarithmic factors) under the central model of user-level DP in certain parameter regimes. We provide several simulations to verify our theoretical findings.
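For context on the single-sample baseline the paper compares against, here is a sketch of discrete distribution estimation under $\varepsilon$-LDP with one sample per user ($m=1$) via $k$-ary randomized response with a standard debiasing step. This is a textbook mechanism, not the paper's user-level algorithm, and all names are our own.

```python
import numpy as np

def krr_estimate(samples, k, eps, rng):
    """Estimate a discrete distribution over {0, ..., k-1} under eps-LDP
    using k-ary randomized response; each user contributes one sample (m = 1)."""
    e = np.exp(eps)
    p_keep = e / (e + k - 1)  # report the true value with this probability
    q = 1.0 / (e + k - 1)     # each specific other value is reported with this probability
    keep = rng.random(len(samples)) < p_keep
    # Uniform draw over the k-1 values *other* than the true one.
    other = (samples + rng.integers(1, k, size=len(samples))) % k
    reports = np.where(keep, samples, other)
    freq = np.bincount(reports, minlength=k) / len(samples)
    # Debias: E[freq_j] = (p_keep - q) * pi_j + q, and p_keep / q = exp(eps).
    return (freq - q) / (p_keep - q)

rng = np.random.default_rng(0)
true_pi = np.array([0.5, 0.25, 0.15, 0.10])
samples = rng.choice(4, size=200_000, p=true_pi)
est = krr_estimate(samples, k=4, eps=2.0, rng=rng)
print(np.round(est, 3))
```

The debiased estimate sums to exactly one, since $1 - kq = p_{\text{keep}} - q$ up to the common denominator.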
We study the problem of robustly estimating the parameter $p$ of an Erdős–Rényi random graph on $n$ nodes, where a $\gamma$ fraction of the nodes may be adversarial. After showing the deficiencies of canonical estimators, we design a computationally efficient spectral algorithm that estimates $p$ up to accuracy $\tilde O(\sqrt{p(1-p)}/n + \gamma\sqrt{p(1-p)}/\sqrt{n} + \gamma/n)$ for $\gamma < 1/60$. Furthermore, we provide an inefficient algorithm for all $\gamma < 1/2$, the information-theoretic limit. Finally, we prove a nearly matching statistical lower bound, showing that the error of our algorithms is optimal up to logarithmic factors.
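To see why canonical estimators fail here, consider the observed edge density, the natural non-robust estimate of $p$. The toy simulation below (our own illustration, not the paper's spectral algorithm) shows that an adversary controlling a $\gamma$ fraction of nodes can inflate it far beyond the sampling error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, gamma = 400, 0.1, 0.05

# Sample a symmetric Erdos-Renyi adjacency matrix from G(n, p).
upper = np.triu(rng.random((n, n)) < p, k=1)
A = upper | upper.T

# Canonical (non-robust) estimator: observed edge density.
# A.sum() counts each edge twice; n*(n-1) counts ordered pairs.
naive = A.sum() / (n * (n - 1))
print(f"clean estimate:     {naive:.3f}")  # close to p = 0.1

# An adversary controlling a gamma fraction of nodes connects them to everyone.
bad = int(gamma * n)
A_corrupt = A.copy()
A_corrupt[:bad, :] = True
A_corrupt[:, :bad] = True
np.fill_diagonal(A_corrupt, False)
corrupted = A_corrupt.sum() / (n * (n - 1))
print(f"corrupted estimate: {corrupted:.3f}")  # inflated well above p
```

The clean estimate concentrates at the $\sqrt{p(1-p)}/n$ rate, while the corrupted one is biased by roughly $2\gamma p$-independent terms of order $\gamma$, which is what motivates a robust spectral approach.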